Rscript. It allows for piping as well as for shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. It also has always loaded the methods package, which Rscript only began to do in recent years.
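As a quick illustration (my own minimal example, not from the release notes), both piping and shebang use look like this:

$ echo 'cat(pi^2, "\n")' | r
9.869604

$ cat hello.r
#!/usr/bin/env r
cat("Hello from littler\n")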
littler lives on Linux and Unix, has its difficulties on macOS due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems as a default were a good idea?) and simply does not exist on Windows (yet the build system could be extended; see RInside for an existence proof, and volunteers are welcome!). See the FAQ vignette on how to add it to your PATH. A few examples are highlighted at the GitHub repo, as well as in the examples vignette.
This release contains a fair number of small changes and improvements to some of the example scripts. The full change description follows.
My CRANberries service provides a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page, and also on the package docs website. The code is available via the GitHub repo, from tarballs, and now of course also from its CRAN page and via install.packages("littler").

Changes in littler version 0.3.19 (2023-12-17)
- Changes in examples scripts
  - The help or usage text display for r2u.r, ttt.r and check.r has been improved, expanded or corrected, respectively
  - installDeps.r has a new argument for dependency selection
  - An initial 'single test file' runner tttf.r has been added
  - r2u.r has two new options for setting / varying the Debian build version of the package that is built, one for BioConductor builds, one for a 'dry run' build, and a new --compile option
  - installRSPM.r, installPPM.r and installP3M.r have been updated to reflect the name changes
  - installRub.r now understands 'package@universe' too
  - ttt.r flips the default of the --effects switch
Binary packages are
available directly in Debian as
well as (in a day or two) Ubuntu binaries at
CRAN thanks to the tireless Michael Rutter.
Comments and suggestions are welcome at the GitHub repo.
If you like this or other open-source work I do, you can sponsor me at
GitHub.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the quick overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo. If you like this or other open-source work I do, you can sponsor me at GitHub.

Changes in RProtoBuf version 0.4.21 (2023-12-13)
- Package now builds with ProtoBuf >= 22.x thanks to Matteo Gianella (#93 addressing #92).
- An Alpine 3.19-based workflow was added to test this in continuous integration thanks to a suggestion by Sebastian Meyer.
- A large number of old-style .Call uses were updated (#96).
- Several packaging, documentation and testing items were updated.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
(==) as distinct.
The issue relates to the design of our data-type representing programs: in particular, it is a consequence of our choice to outsource the structural aspect to a third-party library (Algebra.Graph). The Graph library deems nodes that compare as equivalent (again with ==) to be the same. Since a stream-processing program may contain many operators which are equivalent but distinct, we needed to add a field to our payload type to differentiate them, so we opted for an integer field, vertexId (something I've described as a "wart" elsewhere).
Here's a simplified example of our existing payload type, StreamVertex:
data StreamVertex = StreamVertex
    { vertexId   :: Int
    , operator   :: StreamOperator
    , parameters :: [ExpQ]
    , intype     :: String
    , outtype    :: String
    }
A rewrite rule might introduce or eliminate operators from a stream-processing
program. For example, consider the rule which "hoists" a filter upstream from
a merge operator. In pseudo-Haskell,
streamFilter p . streamMerge [a, b, ...]
=>
streamMerge [ streamFilter p a
            , streamFilter p b
            , ... ]
The original streamFilter is removed, and new streamFilters are introduced, one per stream arriving at streamMerge. In general, rules may need to synthesise new operators, and thus new vertexIds.
Another rewrite rule might perform the reverse operation. But the individual rules operate in isolation, and so the program variant that results after applying a rule and then applying an inverse rule may not have the same vertexIds, or the same order of vertexIds, as the original program.

I came up with the outlines of two possible solutions to this.

"well-numbered" StreamGraphs
The first was to encode (and enforce) some rules about how vertexIds are used. If they always began from (say) 1, and were strictly ascending from the source operator(s), and rewrite rules guaranteed that a "well numbered" input would be "well numbered" after rewriting, this would be sufficient to rule out a rewritten-but-semantically-equivalent program being considered distinct.
The trouble with this approach is that it uses properties of a numerical system built around vertexId as a stand-in for the real structural problem. I was not sure I could prove both that the stand-in system was sound and that it was a proper analogue for the underlying structural issue.
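A minimal sketch of such an invariant check (my illustration, not StrIoT code; it captures only the contiguous-from-1 part, not the ascending-from-sources ordering, and assumes the Ord instance defined for StreamVertex):

import Algebra.Graph (Graph, vertexList)
import Data.List (sort)

-- vertexIds must be exactly 1..n: no gaps, no duplicates
wellNumbered :: Graph StreamVertex -> Bool
wellNumbered g = sort (map vertexId vs) == [1 .. length vs]
  where vs = vertexList g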
It feels to me more like the choice to use an external library to encode the structure of a stream-processing program was the issue: the structure itself is a fundamental part of the semantics of the program. What if we had encoded the structure of programs within the same data-type?
alternative data-type
StrIoT programs are trees. The root is the sink node: there is always exactly one. There can be multiple sources (leaf nodes), but they always converge. Operators can have multiple inputs (including zero). The root node has no output, but all other operators have exactly one.
I explored transforming StreamVertex into a tree by adding a field representing incoming streams, and dispensing with Graph and vertexId. Something like this:
data StreamProg = StreamProg StreamOperator [Exp] String String [StreamProg]
A uni-directional transformation from Graph StreamVertex to StreamProg is all that's needed to implement something like ==, so we don't need to keep track of vertexId mappings. Unfortunately, we can't fix the actual Eq (Graph StreamVertex) implementation this way: it delegates to Eq StreamVertex, and we just don't have enough information to fix the problem at that level. But we can write a separate graphEq and use that instead where we need to, as in the sketch below.
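Roughly, the conversion and comparison might look like this (my sketch under simplifying assumptions: parameters are dropped, since ExpQ values cannot be compared for equality; an Eq instance for StreamOperator is assumed; and the sink is taken to be the unique vertex with no outgoing edge):

import Algebra.Graph (Graph, edgeList, vertexList)

-- simplified tree payload, without vertexId or parameters
data StreamProg = StreamProg StreamOperator String String [StreamProg]
    deriving Eq

toStreamProg :: Graph StreamVertex -> StreamProg
toStreamProg g = go sink
  where
    es   = edgeList g
    -- partial: assumes exactly one vertex with no outgoing edge
    sink = head [v | v <- vertexList g, null [t | (s, t) <- es, s == v]]
    go v = StreamProg (operator v) (intype v) (outtype v)
                      [go u | (u, w) <- es, w == v]

graphEq :: Graph StreamVertex -> Graph StreamVertex -> Bool
graphEq a b = toStreamProg a == toStreamProg b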
could I go further?
Spoiler: I haven't. But I've been sorely tempted.
We still have a separate StreamOperator type, which it would be nice to fold in; and we still have to use a list around the incoming nodes, since different operators accept different numbers of incoming streams. It would be better to encode the correct valences in the type, roughly as in the sketch after this paragraph.
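For instance, a GADT could give each operator its exact arity (a speculative sketch of the direction, not something this post commits to):

{-# LANGUAGE GADTs #-}

-- a stream program producing values of type a; each constructor
-- carries exactly the inputs its operator accepts
data Prog a where
    Source :: [a] -> Prog a                     -- zero stream inputs
    Filter :: (a -> Bool) -> Prog a -> Prog a   -- exactly one
    Join   :: Prog a -> Prog b -> Prog (a, b)   -- exactly two
    Merge  :: [Prog a] -> Prog a                -- any number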
In 2020 I explored iteratively reducing the StreamVertex data-type to try and get it as close as possible to the ideal end-user API: simple functions. I wrote about one step along that path in Template Haskell and Stream-processing programs, but concluded that, since this was not my main PhD focus, I wouldn't go further. But it has been nagging at my subconscious ever since.
I allowed myself a couple of days exploring some advanced concepts including typed Template Haskell (which has had some developments since 2020), generalised algebraic data types (GADTs) and more generic programming, to see what could be achieved.
I'll summarise all that in the next blog post.
drm-fixes-<date>.
2) Examine the issue tracker: Confirm that your issue isn't already documented and addressed in the AMD display driver issue tracker. If you find a similar issue, you can team up with others and speed up the debugging process.
[drm] Display Core v..., it's not likely a display driver issue. If this message doesn't appear in your log, the display driver wasn't fully loaded and you will see a notification that something went wrong here.

[drm] Display Core v3.2.241 initialized on DCN 2.1
[drm] Display Core v3.2.237 initialized on DCN 3.0.1
drivers/gpu/drm/amd/display/dc/dcn301. We all know that AMD's shared code is huge, and you can use these boundaries to rule out code unrelated to your issue.
7) Newer families may inherit code from older ones: you can find dcn301 using code from dcn30, dcn20 and dcn10 files. It's crucial to verify which hooks and helpers your driver utilizes to investigate the right portion. You can leverage ftrace for supplemental validation; to give an example, it was useful when I was updating DCN3 color mapping to correctly use the new post-blending color capabilities, such as in the sketch below.
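For illustration only (the filter pattern below is hypothetical, not the trace from the original debugging session), the function_graph tracer can be narrowed to DCN3 helpers via tracefs:

cd /sys/kernel/tracing
echo 'dcn30_*' > set_ftrace_filter   # wildcard: trace only DCN3.0 functions
echo function_graph > current_tracer
echo 1 > tracing_on
# ... reproduce the issue ...
echo 0 > tracing_on
cat trace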
Additionally, you can use two different HW families to compare behaviours. If you see the issue in one but not in the other, you can compare the code and understand what has changed, and whether the implementation from a previous family doesn't fit the new HW resources or design. You can also count on the help of the community on the Linux AMD issue tracker to validate your code on other hardware and/or systems.
This approach helped me debug a 2-year-old issue where the cursor gamma adjustment was incorrect on DCN3 hardware, but worked correctly for the DCN2 family. I solved the issue in two steps, thanks to community feedback and validation:
drivers/gpu/drm/amd/display/dc/dcn*/dcn*_resource.c file. More precisely, in the dcn*_resource_construct() function.
Using DCN301 for illustration, here is the list of its hardware caps:
/*************************************************
* Resource + asic cap harcoding *
*************************************************/
pool->base.underlay_pipe_index = NO_UNDERLAY_PIPE;
pool->base.pipe_count = pool->base.res_cap->num_timing_generator;
pool->base.mpcc_count = pool->base.res_cap->num_timing_generator;
dc->caps.max_downscale_ratio = 600;
dc->caps.i2c_speed_in_khz = 100;
dc->caps.i2c_speed_in_khz_hdcp = 5; /*1.4 w/a enabled by default*/
dc->caps.max_cursor_size = 256;
dc->caps.min_horizontal_blanking_period = 80;
dc->caps.dmdata_alloc_size = 2048;
dc->caps.max_slave_planes = 2;
dc->caps.max_slave_yuv_planes = 2;
dc->caps.max_slave_rgb_planes = 2;
dc->caps.is_apu = true;
dc->caps.post_blend_color_processing = true;
dc->caps.force_dp_tps4_for_cp2520 = true;
dc->caps.extended_aux_timeout_support = true;
dc->caps.dmcub_support = true;
/* Color pipeline capabilities */
dc->caps.color.dpp.dcn_arch = 1;
dc->caps.color.dpp.input_lut_shared = 0;
dc->caps.color.dpp.icsc = 1;
dc->caps.color.dpp.dgam_ram = 0; // must use gamma_corr
dc->caps.color.dpp.dgam_rom_caps.srgb = 1;
dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1;
dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1;
dc->caps.color.dpp.dgam_rom_caps.pq = 1;
dc->caps.color.dpp.dgam_rom_caps.hlg = 1;
dc->caps.color.dpp.post_csc = 1;
dc->caps.color.dpp.gamma_corr = 1;
dc->caps.color.dpp.dgam_rom_for_yuv = 0;
dc->caps.color.dpp.hw_3d_lut = 1;
dc->caps.color.dpp.ogam_ram = 1;
// no OGAM ROM on DCN301
dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.dpp.ogam_rom_caps.pq = 0;
dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1;
dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; //2
dc->caps.color.mpc.ogam_ram = 1;
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.mpc.ogam_rom_caps.pq = 0;
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1;
dc->caps.dp_hdmi21_pcon_support = true;
/* read VBIOS LTTPR caps */
if (ctx->dc_bios->funcs->get_lttpr_caps) {
	enum bp_result bp_query_result;
	uint8_t is_vbios_lttpr_enable = 0;

	bp_query_result = ctx->dc_bios->funcs->get_lttpr_caps(ctx->dc_bios, &is_vbios_lttpr_enable);
	dc->caps.vbios_lttpr_enable = (bp_query_result == BP_RESULT_OK) && !!is_vbios_lttpr_enable;
}

if (ctx->dc_bios->funcs->get_lttpr_interop) {
	enum bp_result bp_query_result;
	uint8_t is_vbios_interop_enabled = 0;

	bp_query_result = ctx->dc_bios->funcs->get_lttpr_interop(ctx->dc_bios, &is_vbios_interop_enabled);
	dc->caps.vbios_lttpr_aware = (bp_query_result == BP_RESULT_OK) && !!is_vbios_interop_enabled;
}
9) Use git log and git blame to identify commits targeting the code section you're interested in, as in the example below.
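For example (generic git usage with a path taken from the DCN301 discussion above; these are not commands quoted from the original post):

$ git log --oneline -- drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c
$ git blame drivers/gpu/drm/amd/display/dc/dcn301/dcn301_resource.c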
10) Track regressions: If you're examining the amd-staging-drm-next branch, check for regressions between DC release versions. These are defined by DC_VER in the drivers/gpu/drm/amd/display/dc/dc.h file. Alternatively, find a commit with the format drm/amd/display: 3.2.221, which marks a display release; it's useful for bisecting. This information helps you understand how outdated your branch is and identify potential regressions. You can assume each DC_VER takes around one week to be bumped. Finally, check the testing log of each release in the report provided on the amd-gfx mailing list, such as this one: Tested-by: Daniel Wheeler.
sudo bash -c "echo high > /sys/class/drm/card0/device/power_dpm_force_performance_level"
/* Surface update type is used by dc_update_surfaces_and_stream
 * The update type is determined at the very beginning of the function based
 * on parameters passed in and decides how much programming (or updating) is
 * going to be done during the call.
 *
 * UPDATE_TYPE_FAST is used for really fast updates that do not require much
 * logical calculations or hardware register programming. This update MUST be
 * ISR safe on windows. Currently fast update will only be used to flip surface
 * address.
 *
 * UPDATE_TYPE_MED is used for slower updates which require significant hw
 * re-programming however do not affect bandwidth consumption or clock
 * requirements. At present, this is the level at which front end updates
 * that do not require us to run bw_calcs happen. These are in/out transfer func
 * updates, viewport offset changes, recout size changes and pixel depth changes.
 * This update can be done at ISR, but we want to minimize how often this happens.
 *
 * UPDATE_TYPE_FULL is slow. Really slow. This requires us to recalculate our
 * bandwidth and clocks, possibly rearrange some pipes and reprogram anything front
 * end related. Any time viewport dimensions, recout dimensions, scaling ratios or
 * gamma need to be adjusted or pipe needs to be turned on (or disconnected) we do
 * a full update. This cannot be done at ISR level and should be a rare event.
 * Unless someone is stress testing mpo enter/exit, playing with colour or adjusting
 * underscan we don't expect to see this call at all.
 */
enum surface_update_type {
	UPDATE_TYPE_FAST, /* super fast, safe to execute in isr */
	UPDATE_TYPE_MED, /* ISR safe, most of programming needed, no bw/clk change */
	UPDATE_TYPE_FULL, /* may need to shuffle resources */
};
amd64, arm64, armhf, i386, ppc64el, riscv64 and s390x for Debian trixie, unstable and experimental, this is only around 500GB, i.e. less than 1%. Although the new service is not yet ready for use, it has already provided a promising outlook in this regard. More information is available on https://rebuilder-snapshot.debian.net and we hope that this service becomes usable in the coming weeks.
The adjacent picture shows a sticky note authored by Jan-Benedict Glaw at the summit in Hamburg, confirming Holger Levsen's theory that rebuilding all Debian packages needs a very small subset of packages. The text states that 69,200 packages (in Debian sid) list 24,850 packages in their .buildinfo files, in 8,0200 variations. This little piece of paper was the beginning of rebuilder-snapshot and is a direct outcome of the summit!
The Reproducible Builds team would like to thank our event sponsors who include Mullvad VPN, openSUSE, Debian, Software Freedom Conservancy, Allotropia and Aspiration Tech.
[ ] introduce the concepts of Reproducible Builds, including best practices for developing and releasing software, the tools available to help diagnose issues, and touch on progress towards solving decades-old, deeply pervasive fundamental security issues. Learn how to verify and demonstrate trust, rather than simply hoping everything is OK!

Germane to the contents of the talk, the slides for Vagrant's talk can be built reproducibly, resulting in a PDF with a SHA1 of cfde2f8a0b7e6ec9b85377eeac0661d728b70f34 when built on Debian bookworm, and c21fab273232c550ce822c4b0d9988e6c49aa2c3 on Debian sid at the time of writing.
[ ] today I hold in my hands the first two bit-identical LibreOffice rpm packages. And this is the success I wanted to share with you all today [and] it makes me feel as if we can solve anything.
esp32c3 microcontroller firmware reproducible with Rust, repro-env and Arch Linux:

I chose the esp32c3 [board] because it has good Rust support from the esp-rs project, and you can get a dev board for about 6-8 €. To document my build environment I used repro-env together with Arch Linux, because its archive is very reliable and contains all the different Rust development tools I needed.
dump
command and hopes that someone may be able to help.
amd64, arm64, i386 and armhf architectures, data from the Reproducible Builds testing framework is collected by this migration software, even though, at the time of writing, it neither grants migration bonuses nor blocks migration. Indeed, the results are only visible on Britney's excuses as well as on individual packages' pages on tracker.debian.org.
.buildinfo files

Back in 2017, Steve Langasek filed a bug against Ubuntu's Launchpad code hosting platform to report that .changes files (artifacts of building Ubuntu and Debian packages) reference .buildinfo files that aren't actually exposed by Launchpad itself. This was causing issues when attempting to process .changes files with tools such as Lintian. However, it was noticed last month that, in early August of this year, Simon Quigley had resolved this issue, and .buildinfo files are now available from the Launchpad system.
composer.lock
file, ensuring total reproducibility of the shipped binary file. Further details and the discussion that went into their particular implementation can be found on the associated GitHub pull request.
In addition, the presentation Leveraging Nix in the PHP ecosystem has been given in late October at the PHP International Conference in Munich by Pol Dellaiera. While the video replay is not yet available, the (reproducible) presentation slides and speaker notes are available.
7z. [ ]
RequiredToolNotFound import. [ ]
252 to Debian unstable. [ ]
SOURCE_DATE_EPOCH and CMake [ ], added iomart (née Bytemark) and DigitalOcean to our sponsors page [ ] and dropped an unnecessary link on some horizontal navigation buttons [ ].
- amber-cli (date-related issue)
- bin86 (FTBFS-2038)
- buildah (timestamp)
- colord (CPU)
- google-noto-fonts (file modification issue)
- grub2 (directory-related metadata)
- guile-fibers (parallelism issue)
- guile-newt (parallelism issue)
- gutenprint (embedded date/hostname)
- hub (random build path)
- ipxe (nondeterministic behaviour)
- joker / joker
- kopete (undefined behaviour)
- kraft (embedded hostname)
- libcamera (signature)
- libguestfs (embeds build host file)
- llvm (toolchain/Rust-related issue)
- nfdump (date-related issue)
- ovmf (unknown cause)
- quazip (missing fonts)
- rdflib (nondeterministic behaviour)
- rpm (toolchain)
- tigervnc (embedded an RSA signature)
- whatsie (date-related issue)
- xen (time-related issue)
- policycoreutils (sort-related issue)

python-ansible-pygments, bidict, meson, radsecproxy, taffybar, php-doc, pelican, maildir-utils, openmrac-data, vectorscan.
- Priority: important in a new package set. [ ][ ]
- pool_buildinfos script to be re-run for a specific year. [ ]
- osuosl4 node [ ][ ] along with lynxis [ ].
- amd64 Ionos builders from 48 GiB to 64 GiB; thanks IONOS! [ ]
- arm64 architecture workers from 24 to 16 in order to improve stability [ ], reduce the workers for amd64 from 32 to 28 and, for i386, reduce from 12 down to 8 [ ].
- cache_dir size setting to 16 GiB. [ ]
- systemd-oomd as it unfortunately kills sshd [ ]
- debootstrap from backports when commissioning nodes. [ ]
- live_build_debian_stretch_gnome and debsums-tests_buster jobs to the zombie list. [ ][ ]
- jekyll build with the --watch argument when building the Reproducible Builds website. [ ]
- rc.local's Bash syntax so it can actually run [ ], commenting away some file cleanup code that is (potentially) deleting too much [ ] and fixing the html_brekages page for Debian package builds [ ]. Finally, diagnosed and submitted a patch to add an AddEncoding gzip .gz line to the tests.reproducible-builds.org Apache configuration so that Gzip files aren't re-compressed as Gzip, which some clients can't deal with (as well as being a waste of time). [ ]
#reproducible-builds on irc.oftc.net.
rb-general@lists.reproducible-builds.org
ARMA_IGNORE_DEPRECATED_MARKER. So among the over 1100 packages using RcppArmadillo at CRAN, a good dozen or so were flagged in the upload, but CRAN concurred and let the package migrate to CRAN.
If you maintain an affected package, consider applying the patch or pull request now. A simple stop-gap measure also exists: adding -DARMA_IGNORE_DEPRECATED_MARKER to src/Makevars as either PKG_CPPFLAGS or PKG_CXXFLAGS silences the deprecation marker (see the snippet below). But a proper code update, which is generally simple, may be better. If you are unsure, do not hesitate to get in touch.
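A minimal sketch of that stop-gap, straight from the flag described above (use one of the two variables):

## src/Makevars: silence the Armadillo deprecation marker
PKG_CPPFLAGS = -DARMA_IGNORE_DEPRECATED_MARKER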
The set of changes since the last CRAN release follows.
Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page. If you like this or other open-source work I do, you can sponsor me at GitHub.

Changes in RcppArmadillo version 0.12.6.6.1 (2023-12-03)
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
-Wformat -Wformat-security from the development branch of R. It also includes a new example snippet illustrating the creation of a numeric matrix.
The NEWS entry follows.
Thanks to my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Changes in tidyCpp version 0.0.7 (2023-11-30)
- Add an example for a numeric matrix creator
- Update the continuous integration setup
- Accommodate print format warnings from r-devel
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
printf (or alike) format checker found two more small issues. The run-time checker for examples was unhappy with the callable bond example, so we only run it in interactive mode now. I had already commented-out the setting for a C++14 compilation (required by the remaining Boost headers), as C++14 has been the default since R 4.2.0 (with suitable compilers, at least); those who need it explicitly will have to uncomment the line in src/Makevars.in. Lastly, the expanded printf format checks also found a need for a small change in Rcpp, so the development version (now 1.0.11.5) has that addressed; the change will be part of Rcpp 1.0.12 in January.
Courtesy of my CRANberries, there is also a diffstat report for this release 0.4.20. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

Changes in RQuantLib version 0.4.20 (2023-11-26)
- Correct three help pages with stray curly braces
- Correct two printf format strings
- Comment-out explicit selection of C++14
- Wrap one example inside 'if (interactive())' to not exceed total running time limit at CRAN checks
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.
.gitignore file, was bug 774109. It added a script to install the prerequisites to build Firefox on macOS (still called OSX back then), and that would print a message inviting people to obtain a copy of the source code with either Mercurial or Git. That was a precursor to the current bootstrap.py, from September 2012.
Following that, as far as I can tell, the first real incursion of Git in the Firefox source tree tooling happened in bug 965120. A few days earlier, bug 952379 had added a mach clang-format
command that would apply clang-format-diff
to the output from hg diff
. Obviously, running hg diff
on a Git working tree didn't work, and bug 965120 was filed, and support for Git was added there. That was in January 2014.
A year later, when the initial implementation of mach artifact
was added (which ultimately led to artifact builds), Git users were an immediate thought. But while they were considered, it was not to support them, but to avoid actively breaking their workflows. Git support for mach artifact
was eventually added 14 months later, in March 2016.
From gecko-dev to git-cinnabar
Let's step back a little here, back to the end of 2014. My user experience with Mercurial had reached a level of dissatisfaction that was enough for me to decide to take that script from a couple years prior and make it work for incremental updates. That meant finding a way to store enough information locally to be able to reconstruct whatever the incremental updates would be relying on (guess why other tools hid a local Mercurial clone under the hood). I got something working rather quickly, and after talking to a few people about this side project at the Mozilla Portland All Hands and seeing their excitement, I published a git-remote-hg initial prototype on the last day of the All Hands.
Within weeks, the prototype gained the ability to directly push to Mercurial repositories, and a couple months later, was renamed to git-cinnabar. At that point, as a Git user, instead of cloning the gecko-dev repository from GitHub and switching to a local Mercurial repository whenever you needed to push to a Mercurial repository (i.e. the aforementioned Try server, or, at the time, for reviews), you could just clone and push directly from/to Mercurial, all within Git. And it was fast too. You could get a full clone of mozilla-central in less than half an hour, when at the time, other similar tools would take more than 10 hours (needless to say, it's even worse now).
Another couple months later (we're now at the end of April 2015), git-cinnabar became able to start off a local clone of the gecko-dev repository, rather than clone from scratch, which could be time consuming. But because git-cinnabar and the tool that was updating gecko-dev weren't producing the same commits, this setup was cumbersome and not really recommended. For instance, if you pushed something to mozilla-central with git-cinnabar from a gecko-dev clone, it would come back with a different commit hash in gecko-dev, and you'd have to deal with the divergence.
Eventually, in April 2020, the scripts updating gecko-dev were switched to git-cinnabar, making the use of gecko-dev alongside git-cinnabar a more viable option. Ironically(?), the switch occurred to ease collaboration with KaiOS (you know, the mobile OS born from the ashes of Firefox OS). Well, okay, in all honesty, when the need of syncing in both directions between Git and Mercurial (we only had ever synced from Mercurial to Git) came up, I nudged Mozilla in the direction of git-cinnabar, which, in my (biased but still honest) opinion, was the more reliable option for two-way synchronization (we did have regular conversion problems with hg-git, nothing of the sort has happened since the switch).
One Firefox repository to rule them all
For reasons I don't know, Mozilla decided to use separate Mercurial repositories as "branches". With the switch to the rapid release process in 2011, that meant one repository for nightly (mozilla-central), one for aurora, one for beta, and one for release. And with the addition of Extended Support Releases in 2012, we now add a new ESR repository every year. Boot to Gecko also had its own branches, and so did Fennec (Firefox for Mobile, before Android). There are a lot of them.
And then there are also integration branches, where developer's work lands before being merged in mozilla-central (or backed out if it breaks things), always leaving mozilla-central in a (hopefully) good state. Only one of them remains in use today, though.
I can only suppose that the way Mercurial branches work was not deemed practical. It is worth noting, though, that Mercurial branches are used in some cases, to branch off a dot-release when the next major release process has already started, so it's not a matter of not knowing the feature exists or some such.
In 2016, Gregory Szorc set up a new repository that would contain them all (or at least most of them), which eventually became what is now the mozilla-unified repository. This would e.g. simplify switching between branches when necessary.
7 years later, for some reason, the other "branches" still exist, but most developers are expected to be using mozilla-unified. Mozilla's CI also switched to using mozilla-unified as base repository.
Honestly, I'm not sure why the separate repositories are still the main entry point for pushes, rather than going directly to mozilla-unified, but it probably comes down to switching being work, and not being a top priority. Also, it probably doesn't help that working with multiple heads in Mercurial, even (especially?) with bookmarks, can be a source of confusion. To give an example, if you aren't careful, and do a plain clone of the mozilla-unified repository, you may not end up on the latest mozilla-central changeset, but rather, e.g. one from beta, or some other branch, depending which one was last updated.
Hosting is simple, right?
Put your repository on a server, install hgweb or gitweb, and that's it? Maybe that works for... Mercurial itself, but that repository "only" has slightly over 50k changesets and less than 4k files. Mozilla-central has more than an order of magnitude more changesets (close to 700k) and two orders of magnitude more files (more than 700k if you count the deleted or moved files, 350k if you count the currently existing ones).
And remember, there are a lot of "duplicates" of this repository. And I didn't even mention user repositories and project branches.
Sure, it's a self-inflicted pain, and you'd think it could probably(?) be mitigated with shared repositories. But consider the simple case of two repositories: mozilla-central and autoland. You make autoland use mozilla-central as a shared repository. Now, you push something new to autoland, it's stored in the autoland datastore. Eventually, you merge to mozilla-central. Congratulations, it's now in both datastores, and you'd need to clean up autoland if you wanted to avoid the duplication.
Now, you'd think mozilla-unified would solve these issues, and it would... to some extent. Because that wouldn't cover user repositories and project branches briefly mentioned above, which in GitHub parlance would be considered as Forks. So you'd want a mega global datastore shared by all repositories, and repositories would need to only expose what they really contain. Does Mercurial support that? I don't think so (okay, I'll give you that: even if it doesn't, it could, but that's extra work). And since we're talking about a transition to Git, does Git support that? You may have read about how you can link to a commit from a fork and make-pretend that it comes from the main repository on GitHub? At least, it shows a warning, now. That's essentially the architectural reason why. So the actual answer is that Git doesn't support it out of the box, but GitHub has some backend magic to handle it somehow (and hopefully, other things like Gitea, Girocco, Gitlab, etc. have something similar).
Now, to come back to the size of the repository. A repository is not a static file. It's a server with which you negotiate what you have against what it has that you want. Then the server bundles what you asked for based on what you said you have. Or in the opposite direction, you negotiate what you have that it doesn't, you send it, and the server incorporates what you sent it. Fortunately the latter is less frequent and requires authentication. But the former is more frequent and CPU intensive. Especially when pulling a large number of changesets, which, incidentally, cloning is.
"But there is a solution for clones" you might say, which is true. That's clonebundles, which offload the CPU intensive part of cloning to a single job scheduled regularly. Guess who implemented it? Mozilla. But that only covers the cloning part. We actually had laid the ground to support offloading large incremental updates and split clones, but that never materialized. Even with all that, that still leaves you with a server that can display file contents, diffs, blames, provide zip archives of a revision, and more, all of which are CPU intensive in their own way.
And these endpoints are regularly abused, and cause extra load to your servers, yes plural, because of course a single server won't handle the load for the number of users of your big repositories. And because your endpoints are abused, you have to close some of them. And I'm not mentioning the Try repository with its tens of thousands of heads, which brings its own sets of problems (and it would have even more heads if we didn't fake-merge them once in a while).
Of course, all the above applies to Git (and it only gained support for something akin to clonebundles last year). So, when the Firefox OS project was stopped, there wasn't much motivation to continue supporting our own Git server, Mercurial still being the official point of entry, and git.mozilla.org was shut down in 2016.
The growing difficulty of maintaining the status quo
Slowly, but steadily in more recent years, as new tooling was added that needed some input from the source code manager, support for Git was more and more consistently added. But at the same time, as people left for other endeavors and weren't necessarily replaced, or more recently with layoffs, resources allocated to such tooling have been spread thin.
Meanwhile, the repository growth didn't take a break, and the Try repository was becoming an increasing pain, with push times quite often exceeding 10 minutes. The ongoing work to move Try pushes to Lando will hide the problem under the rug, but the underlying problem will still exist (although the last version of Mercurial seems to have improved things).
On the flip side, more and more people have been relying on Git for Firefox development, to my own surprise, as I didn't really push for that to happen. It just happened organically, by ways of git-cinnabar existing, providing a compelling experience to those who prefer Git, and, I guess, word of mouth. I was genuinely surprised when I recently heard the use of Git among moz-phab users had surpassed a third. I did, however, occasionally orient people who struggled with Mercurial and said they were more familiar with Git, towards git-cinnabar. I suspect there's a somewhat large number of people who never realized Git was a viable option.
But that, on its own, can come with its own challenges: if you use git-cinnabar without being backed by gecko-dev, you'll have a hard time sharing your branches on GitHub, because you can't push to a fork of gecko-dev without pushing your entire local repository, as they have different commit histories. And switching to gecko-dev when you weren't already using it requires some extra work to rebase all your local branches from the old commit history to the new one.
Clone times with git-cinnabar have also started to go a little out of hand in the past few years, but this was mitigated in a similar manner as with the Mercurial cloning problem: with static files that are refreshed regularly. Ironically, that made cloning with git-cinnabar faster than cloning with Mercurial. But generating those static files is increasingly time-consuming. As of writing, generating those for mozilla-unified takes close to 7 hours. I was predicting clone times over 10 hours "in 5 years" in a post from 4 years ago, I wasn't too far off. With exponential growth, it could still happen, although to be fair, CPUs have improved since. I will explore the performance aspect in a subsequent blog post, alongside the upcoming release of git-cinnabar 0.7.0-b1. I don't even want to check how long it now takes with hg-git or git-remote-hg (they were already taking more than a day when git-cinnabar was taking a couple hours).
I suppose it's about time that I clarify that git-cinnabar has always been a side-project. It hasn't been part of my duties at Mozilla, and the extent to which Mozilla supports git-cinnabar is in the form of taskcluster workers on the community instance for both git-cinnabar CI and generating those clone bundles. Consequently, that makes the above git-cinnabar specific issues a Me problem, rather than a Mozilla problem.
Taking the leap
I can't talk for the people who made the proposal to move to Git, nor for the people who put a green light on it. But I can at least give my perspective.
Developers have regularly asked why Mozilla was still using Mercurial, but I think it was the first time that a formal proposal was laid out. And it came from the Engineering Workflow team, responsible for issue tracking, code reviews, source control, build and more.
It's easy to say "Mozilla should have chosen Git in the first place", but back in 2007, GitHub wasn't there, Bitbucket wasn't there, and all the available options were rather new (especially compared to the then 21-year-old CVS). I think Mozilla made the right choice, all things considered. Had they waited a couple years, the story might have been different.
You might say that Mozilla stayed with Mercurial for so long because of the sunk cost fallacy. I don't think that's true either. But after the biggest Mercurial repository hosting service turned off Mercurial support, and the main contributor to Mercurial going their own way, it's hard to ignore that the landscape has evolved.
And the problems that we regularly encounter with the Mercurial servers are not going to get any better as the repository continues to grow. As far as I know, all the Mercurial repositories bigger than Mozilla's are... not using Mercurial. Google has its own closed-source server, and Facebook has another of its own, and it's not really public either. With resources spread thin, I don't expect Mozilla to be able to continue supporting a Mercurial server indefinitely (although I guess Octobus could be contracted to give a hand, but is that sustainable?).
Mozilla, being a champion of Open Source, also doesn't live in a silo. At some point, you have to meet your contributors where they are. And the Open Source world is now predominantly using Git. I'm sure the vast majority of new hires at Mozilla in the past, say, 5 years, know Git and have had to learn Mercurial (although they arguably didn't need to). Even within Mozilla, with thousands(!) of repositories on GitHub, Firefox is now actually the exception rather than the norm. I should even actually say Desktop Firefox, because even Mobile Firefox lives on GitHub (although Fenix is moving back in together with Desktop Firefox, and the timing is such that that will probably happen before Firefox moves to Git).
Heck, even Microsoft moved to Git!
With a significant developer base already using Git thanks to git-cinnabar, and all the constraints and problems I mentioned previously, it actually seems natural that a transition (finally) happens. However, had git-cinnabar or something similarly viable not existed, I don't think Mozilla would be in a position to take this decision. On one hand, it probably wouldn't be in the current situation of having to support both Git and Mercurial in the tooling around Firefox, nor the resource constraints related to that. But on the other hand, it would be farther from supporting Git and being able to make the switch in order to address all the other problems.
But... GitHub?
I hope I made a compelling case that hosting is not as simple as it can seem, at the scale of the Firefox repository. It's also not Mozilla's main focus. Mozilla has enough on its plate with the migration of existing infrastructure that does rely on Mercurial to understandably not want to figure out the hosting part, especially with limited resources, and with the mixed experience hosting both Mercurial and git has been so far.
After all, GitHub couldn't even display things like the contributors' graph on gecko-dev until recently, and hosting is literally their job! They still drop the ball on large blames (thankfully we have searchfox for those).
Where does that leave us? Gitlab? For those criticizing GitHub for being proprietary, that's probably not open enough. Cloud Source Repositories? "But GitHub is Microsoft" is a complaint I've read a lot after the announcement. Do you think Google hosting would have appealed to these people? Bitbucket? I'm kind of surprised it wasn't in the list of providers that were considered, but I'm also kind of glad it wasn't (and I'll leave it at that).
I think the only relatively big hosting provider that could have made the people criticizing the choice of GitHub happy is Codeberg, but I hadn't even heard of it before it was mentioned in response to Mozilla's announcement. But really, with literal thousands of Mozilla repositories already on GitHub, with literal tens of millions repositories on the platform overall, the pragmatic in me can't deny that it's an attractive option (and I can't stress enough that I wasn't remotely close to the room where the discussion about what choice to make happened).
"But it's a slippery slope". I can see that being a real concern. LLVM also moved its repository to GitHub (from a (I think) self-hosted Subversion server), and ended up moving off Bugzilla and Phabricator to GitHub issues and PRs four years later. As an occasional contributor to LLVM, I hate this move. I hate the GitHub review UI with a passion.
At least, right now, GitHub PRs are not a viable option for Mozilla, for their lack of support for security related PRs, and the more general shortcomings in the review UI. That doesn't mean things won't change in the future, but let's not get too far ahead of ourselves. The move to Git has just been announced, and the migration has not even begun yet. Just because Mozilla is moving the Firefox repository to GitHub doesn't mean it's locked in forever or that all the eggs are going to be thrown into one basket. If bridges need to be crossed in the future, we'll see then.
So, what's next?
The official announcement said we're not expecting the migration to really begin until six months from now. I'll swim against the current here, and say this: the earlier you can switch to git, the earlier you'll find out what works and what doesn't work for you, whether you already know Git or not.
While there is not one unique workflow, here's what I would recommend anyone who wants to take the leap off Mercurial right now:
Install git-cinnabar where mach bootstrap would install it:
$ mkdir -p ~/.mozbuild/git-cinnabar
$ cd ~/.mozbuild/git-cinnabar
$ curl -sOL https://raw.githubusercontent.com/glandium/git-cinnabar/master/download.py
$ python3 download.py && rm download.py
Add git-cinnabar to your PATH. Make sure to also set that wherever you keep your PATH up-to-date (.bashrc or wherever else):
$ PATH=$PATH:$HOME/.mozbuild/git-cinnabar
$ git init
$ git remote add origin https://github.com/mozilla/gecko-dev
$ git remote update origin
$ git remote set-url origin hg::https://hg.mozilla.org/mozilla-unified
$ git config --local remote.origin.cinnabar-refs bookmarks
$ git remote update origin --prune
$ git -c cinnabar.refs=heads fetch hg::$PWD refs/heads/default/*:refs/heads/hg/*
This will create a bunch of hg/<sha1> local branches, not all relevant to you (some come from old branches on mozilla-central). Note that if you're using Mercurial MQ, this will not pull your queues, as they don't exist as heads in the Mercurial repo. You'd need to apply your queues one by one and run the command above for each of them.

$ git -c cinnabar.refs=bookmarks fetch hg::$PWD refs/heads/*:refs/heads/hg/*

This will create hg/<bookmark_name> branches.
branches.
$ git reset $(git cinnabar hg2git $(hg log -r . -T '{node}'))
This will take a little moment because Git is going to scan all the files in the tree for the first time. On the other hand, it won't touch their content or timestamps, so if you had a build around, it will still be valid, and mach build
won't rebuild anything it doesn't have to.
$ git branch <branch_name> $(git cinnabar hg2git <hg_sha1>)
At this point, you should have everything available on the Git side, and you can remove the .hg
directory. Or move it into some empty directory somewhere else, just in case. But don't leave it here, it will only confuse the tooling. Artifact builds WILL be confused, though, and you'll have to ./mach configure
before being able to do anything. You may also hit bug 1865299 if your working tree is older than this post.
If you have any problem or question, you can ping me on #git-cinnabar or #git on Matrix. I'll put the instructions above somewhere on wiki.mozilla.org, and we can collaboratively iterate on them.
Now, what the announcement didn't say is that the Git repository WILL NOT be gecko-dev, doesn't exist yet, and WON'T BE COMPATIBLE (trust me, it'll be for the better). Why did I make you do all the above, you ask? Because that won't be a problem. I'll have you covered, I promise. The upcoming release of git-cinnabar 0.7.0-b1 will have a way to smoothly switch between gecko-dev and the future repository (incidentally, that will also allow to switch from a pure git-cinnabar clone to a gecko-dev one, for the git-cinnabar users who have kept reading this far).
What about git-cinnabar?
With Mercurial going the way of the dodo at Mozilla, my own need for git-cinnabar will vanish. Legitimately, this begs the question whether it will still be maintained.
I can't answer for sure. I don't have a crystal ball. However, the needs of the transition itself will motivate me to finish some long-standing things (like finalizing the support for pushing merges, which is currently behind an experimental flag) or implement some missing features (support for creating Mercurial branches).
Git-cinnabar started as a Python script, it grew a sidekick implemented in C, which then incorporated some Rust, which then cannibalized the Python script and took its place. It is now close to 90% Rust, and 10% C (if you don't count the code from Git that is statically linked to it), and has sort of become my Rust playground (it's also, I must admit, a mess, because of its history, but it's getting better). So the day to day use with Mercurial is not my sole motivation to keep developing it. If it were, it would stay stagnant, because all the features I need are there, and the speed is not all that bad, although I know it could be better. Arguably, though, git-cinnabar has been relatively stagnant feature-wise, because all the features I need are there.
So, no, I don't expect git-cinnabar to die along Mercurial use at Mozilla, but I can't really promise anything either.
Final words
That was a long post. But there was a lot of ground to cover. And I still skipped over a bunch of things. I hope I didn't bore you to death. If I did and you're still reading... what's wrong with you? ;)
So this is the end of Mercurial at Mozilla. So long, and thanks for all the fish. But this is also the beginning of a transition that is not easy, and that will not be without hiccups, I'm sure. So fasten your seatbelts (plural), and welcome the change.
To circle back to the clickbait title, did I really kill Mercurial at Mozilla? Of course not. But it's like I stumbled upon a few sparks and tossed a can of gasoline on them. I didn't start the fire, but I sure made it into a proper bonfire... and now it has turned into a wildfire.
And who knows? 15 years from now, someone else might be looking back at how Mozilla picked Git at the wrong time, and that, had we waited a little longer, we would have picked some yet to come new horse. But hey, that's the tech cycle for you.
Ubuntu 23.10 Mantic Minotaur Desktop, showing network settings
We released Ubuntu 23.10 Mantic Minotaur on 12 October 2023, shipping its proven and trusted network stack based on Netplan. Netplan has been the default tool to configure Linux networking on Ubuntu since 2016. In the past, it was primarily used to control the Server and Cloud variants of Ubuntu, while on Desktop systems it would hand over control to NetworkManager. In Ubuntu 23.10 this disparity in how to control the network stack on different Ubuntu platforms was closed by integrating NetworkManager with the underlying Netplan stack.
Netplan could already be used to describe network connections on Desktop systems managed by NetworkManager. But network connections created or modified through NetworkManager would not be known to Netplan, so it was a one-way street. Activating the bidirectional NetworkManager-Netplan integration allows any configuration change made through NetworkManager to be propagated back into Netplan. Changes made in Netplan itself will still be visible in NetworkManager, as before. This way, Netplan can be considered the single source of truth for network configuration across all variants of Ubuntu, with the network configuration stored in /etc/netplan/, using Netplan's common and declarative YAML format.
/etc/netplan/. This way, the only thing administrators need to care about when managing a fleet of Desktop installations is Netplan. Furthermore, programmatic access to all network configuration is now easily available to other system components integrating with Netplan, such as snapd. This solution has already been used in more confined environments, such as Ubuntu Core, and is now enabled by default on Ubuntu 23.10 Desktop.
/etc/NetworkManager/system-connections/ will automatically and transparently be migrated to Netplan's declarative YAML format and stored in its common configuration directory /etc/netplan/.
The same migration will happen in the background whenever you add or modify any connection profile through the NetworkManager user interface, integrated with GNOME Shell. From this point on, Netplan will be aware of your entire network configuration and you can query it using its CLI tools, such as sudo netplan get or sudo netplan status, without interrupting traditional NetworkManager workflows (UI, nmcli, nmtui, D-Bus APIs). You can observe this migration on the apt-get command line, watching out for logs like the following:
Setting up network-manager (1.44.2-1ubuntu1.1) ...
Migrating HomeNet (9d087126-ae71-4992-9e0a-18c5ea92a4ed) to /etc/netplan
Migrating eduroam (37d643bb-d81d-4186-9402-7b47632c59b1) to /etc/netplan
Migrating DebConf (f862be9c-fb06-4c0f-862f-c8e210ca4941) to /etc/netplan
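To give a rough idea of the outcome (a hedged sketch only; the exact file name and keys depend on the Netplan and NetworkManager versions), the HomeNet profile above would end up as a small Netplan YAML file along these lines:

# /etc/netplan/90-NM-9d087126-ae71-4992-9e0a-18c5ea92a4ed.yaml (illustrative)
network:
  version: 2
  wifis:
    NM-9d087126-ae71-4992-9e0a-18c5ea92a4ed:
      renderer: NetworkManager
      access-points:
        "HomeNet":
          networkmanager:
            uuid: "9d087126-ae71-4992-9e0a-18c5ea92a4ed"
            name: "HomeNet"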
In order to prepare for a smooth transition, NetworkManager tests were integrated into Netplan's continuous integration pipeline at the upstream GitHub repository. Furthermore, we implemented a passthrough method of handling unknown or new settings that cannot yet be fully covered by Netplan, making Netplan future-proof for any upcoming NetworkManager release.
eamanu) Arias, a fellow Argentinian Debian Developer from La Rioja, so I had the pleasure to travel with him.

To be honest, Gunnar already did a wonderful blog post with many pictures; I should have taken more.

I had the opportunity to talk about device trees, and even look at one of Gunnar's machines in order to find out why a DisplayPort port was not working with one kernel but did with another. At the same time I also had time to start packaging qt6-grpc. Sadly I was there just one entire day, as I arrived on Thursday afternoon and had to leave on Saturday after lunch, but we did have a lot of quality Debian time.
I'll repeat here what Gunnar already wrote:
We had a long, important conversation about an important discussion that we are about to present on debian-vote@lists.debian.org.

Stay tuned for that; I think this is something we should all get involved in.
All in all I already miss hacking with people in the same room. Meetings for us mean a lot of distance to be traveled (well, I live far away from almost everything), but I really should try to do this more often. Certainly more than just once every 15 years :-)
lisandro), long-time maintainer of the Qt ecosystem, and one of our embedded world extraordinaires. So, after we got him dry and fed him fresh river fishes, he gave us a great impromptu talk about understanding and finding our way around the Device Tree Source files for development boards and similar machines, mostly in the ARM world.
From Argentina, we also had Emanuel (eamanu
) crossing all the way from La
Rioja.
I spent most of our first workday getting my laptop in shape to be useful as the driver for my online class on Thursday (which is no small feat, as people who know the particularities of my much-loved ARM-based laptop will understand), and running a set of tests again on my Raspberry Pi laboratory, which I had not updated in several months.
I am happy to say we are also finally building Raspberry images for Trixie (Debian 13, Testing)! Sadly, I managed to burn my USB-to-serial-console (UART) adaptor, and could neither test those, nor the oldstable ones we are still building (which will probably soon be dropped, if for nothing else, to save disk space).
We enjoyed a lot of socialization time. An important highlight of the conference for me was that we reconnected with a long-lost DD, Eduardo Trápani, and got him interested in getting involved in the project again! This second day, another local Uruguayan, Mauricio, joined us together with his girlfriend, Alicia, and Felipe came again to hang out with us. Sadly, we didn't get photographic evidence of them (nor the permission to post it).
The nice house Santiago got for us was very well equipped for a
miniDebConf. There were a couple of rounds of pool played by those that enjoyed
it (I was very happy just to stand around, take some photos and enjoy the
atmosphere and the conversation).
Today (Saturday) is the last full-house day of miniDebConf; tomorrow we will be leaving the house by noon. It was also a very productive day! We had a long, important conversation about an important discussion that we are about to present on debian-vote@lists.debian.org.
It has been a great couple of days! Sadly, it's coming to an end. But this at least gives me the opportunity (and moral obligation!) to write a long blog post. And to thank Santiago for organizing this, and Debian, for sponsoring our trip, stay, food and healthy enjoyment!
[ ] In March 2023, Ken gave the closing keynote [and] during the Q&A session, someone jokingly asked about the Turing award lecture, specifically "can you tell us right now whether you have a backdoor into every copy of gcc and Linux still today?" Although Ken reveals (or at least claims!) that he has no such backdoor, he does admit that he has the actual code, which Russ requests and subsequently dissects in great but accessible detail.
Arch Linux packages become reproducible a median of 30 days quicker when compared to Debian packages, while Debian packages remain reproducible for a median of 68 days longer once fixed.

A full PDF of their paper is available online, as are many other interesting papers on the MCIS publication page.
nixos-minimal image that is used to install NixOS. In their post, Arnout details what exactly can be reproduced, and even includes some of the history of this endeavour:

You may remember a 2021 announcement that the minimal ISO was 100% reproducible. While back then we successfully tested that all packages that were needed to build the ISO were individually reproducible, actually rebuilding the ISO still introduced differences. This was due to some remaining problems in the hydra cache and the way the ISO was created. By the time we fixed those, regressions had popped up (notably an upstream problem in Python 3.10), and it isn't until this week that we were back to having everything reproducible and being able to validate the complete chain.

Congratulations to the NixOS team for reaching this important milestone! Discussion about this announcement can be found underneath the post itself, as well as on Hacker News.
arm64 hardware from Codethink

Long-time sponsor of the project, Codethink, have generously replaced our old "Moonshot-Slides", which they have hosted since 2016, with new KVM-based arm64
hardware. Holger Levsen integrated these new nodes into the Reproducible Builds continuous integration framework.
ext4
filesystem images. [ ]
SOURCE_DATE_EPOCH
environment variable in order to close bug #1034422. In addition, 8 reviews of packages were added, 74 were updated and 56 were removed this month, all adding to our knowledge about identified issues.
Bernhard M. Wiedemann published another monthly report about reproducibility within openSUSE.
- edje_cc (race condition)
- elasticsearch (build failure)
- erlang-retest (embedded .zip timestamp)
- fdo-client (embeds private keys)
- fftw3 (random ordering)
- gsoap (date issue)
- gutenprint (date)
- hub/golang (embeds random build path)
- Hyprland (filesystem issue)
- kitty (sort-related issue, .tar file embeds modification time)
- libpinyin (ASLR)
- maildir-utils (date embedded in copyright)
- mame (order-related issue)
- mingw32-binutils & mingw64-binutils (date)
- MooseX (date from perl-MooseX-App)
- occt (sorting issue)
- openblas (embeds CPU count)
- OpenRGB (corruption-related issue)
- python-numpy (random file names)
- python-pandas (FTBFS)
- python-quantities (date)
- python3-pyside2 (order)
- qemu (date and Sphinx issue)
- qpid (sorting problem)
- rakudo (filesystem ordering issue)
- SLOF (date-related issue)
- spack (CPU counting issue)
- xemacs-packages (date-related issue)

If file -i returns text/plain, fall back to comparing as a text file. This was originally filed as Debian bug #1053668 by Niels Thykier. [ ] This was then uploaded to Debian (and elsewhere) as version 251.
- #debian-reproducible-changes IRC channel. [ ][ ][ ]
- systemd-oomd on all Debian bookworm nodes (re. Debian bug #1052257). [ ]
- schroots. [ ]
- arm64 machines from Codethink. [ ][ ][ ][ ][ ][ ]

#reproducible-builds on irc.oftc.net, or the rb-general@lists.reproducible-builds.org mailing list.
struct drm_property
. We extended the color
management interface exposed to userspace by leveraging existing resources and
connecting them with driver-specific functions for managing modeset properties.
On the AMD DC layer, the interface with hardware color blocks is established.
The AMD DC layer contains OS-agnostic components that are shared across
different platforms, making it an invaluable resource. This layer already
implements hardware programming and resource management, simplifying the external
developer's task. While examining the DC code, we gain insights into the color
pipeline and capabilities, even without direct access to specifications.
Additionally, AMD developers provide essential support by answering queries and
reviewing our work upstream.
The primary challenge involved identifying and understanding relevant AMD DC
code to configure each color block in the color pipeline. However, the ultimate
goal was to bridge the DC color capabilities with the DRM API. For this, we
changed the AMD DM, the OS-dependent layer connecting the
DC interface to the DRM/KMS framework. We defined and managed driver-specific
color properties, facilitated the transport of user space data to the DC, and
translated DRM features and settings to the DC interface. Considerations were
also made for differences in the color pipeline based on hardware capabilities.
Transfer functions fall into three categories:
- OETF: the opto-electronic transfer function, which converts linear scene light into the video signal, typically within a camera.
- EOTF: the electro-optical transfer function, which converts the video signal into the linear light output of the display.
- OOTF: the opto-optical transfer function, which has the role of applying the rendering intent.

AMD's display driver supports the following pre-defined transfer functions (aka named fixed curves). These capabilities vary depending on the hardware block, with some utilizing hardcoded curves and others relying on AMD's color module to construct curves from standardized coefficients. It also supports user/custom curves built from a lookup table.
- Linear/Unity: linear/identity relationship between pixel value and luminance value;
- Gamma 2.2, Gamma 2.4, Gamma 2.6: pure power functions;
- sRGB: the piece-wise transfer function from IEC 61966-2-1:1999;
- BT.709: has a linear segment in the bottom part and then a power function with a 0.45 (~1/2.22) gamma for the rest of the range; standardized by ITU-R BT.709-6;
- PQ (Perceptual Quantizer): used for HDR display, allows luminance range capability of 0 to 10,000 nits; standardized by SMPTE ST 2084.
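To make the difference between the pure power functions and the piece-wise sRGB curve concrete, here is a minimal C sketch (illustrative only; the driver itself works in fixed-point arithmetic, not floating point) of the two EOTFs:

```c
#include <math.h>

/* Pure power function, e.g. Gamma 2.2: encoded -> linear. */
static double gamma22_eotf(double encoded)
{
    return pow(encoded, 2.2);
}

/* Piece-wise sRGB EOTF (IEC 61966-2-1:1999): a linear toe below
 * 0.04045 followed by a 2.4 power segment, so it is close to, but
 * not the same as, a pure Gamma 2.2/2.4 curve. */
static double srgb_eotf(double encoded)
{
    if (encoded <= 0.04045)
        return encoded / 12.92;
    return pow((encoded + 0.055) / 1.055, 2.4);
}
```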
struct dpp_color_caps and struct mpc_color_caps.
The AMD Steam Deck hardware provides a tangible example of these capabilities.
Therefore, we take the Steam Deck/DCN301 driver as an example and look at the color
pipeline capabilities described in the file
driver/gpu/drm/amd/display/dcn301/dcn301_resources.c:
/* Color pipeline capabilities */
dc->caps.color.dpp.dcn_arch = 1; // If it is a Display Core Next (DCN): yes. Zero means DCE.
dc->caps.color.dpp.input_lut_shared = 0;
dc->caps.color.dpp.icsc = 1; // Input Color Space Conversion (CSC) matrix.
dc->caps.color.dpp.dgam_ram = 0; // The old degamma block for degamma curve (hardcoded and LUT). Gamma correction is the new one.
dc->caps.color.dpp.dgam_rom_caps.srgb = 1; // sRGB hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.bt2020 = 1; // BT2020 hardcoded curve support (seems not actually in use)
dc->caps.color.dpp.dgam_rom_caps.gamma2_2 = 1; // Gamma 2.2 hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.pq = 1; // PQ hardcoded curve support
dc->caps.color.dpp.dgam_rom_caps.hlg = 1; // HLG hardcoded curve support
dc->caps.color.dpp.post_csc = 1; // CSC matrix
dc->caps.color.dpp.gamma_corr = 1; // New Gamma Correction block for degamma user LUT;
dc->caps.color.dpp.dgam_rom_for_yuv = 0;
dc->caps.color.dpp.hw_3d_lut = 1; // 3D LUT support. If so, it's always preceded by a shaper curve.
dc->caps.color.dpp.ogam_ram = 1; // Blend Gamma block for custom curve just after blending
// no OGAM ROM on DCN301
dc->caps.color.dpp.ogam_rom_caps.srgb = 0;
dc->caps.color.dpp.ogam_rom_caps.bt2020 = 0;
dc->caps.color.dpp.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.dpp.ogam_rom_caps.pq = 0;
dc->caps.color.dpp.ogam_rom_caps.hlg = 0;
dc->caps.color.dpp.ocsc = 0;
dc->caps.color.mpc.gamut_remap = 1; // Post-blending CTM (pre-blending CTM is always supported)
dc->caps.color.mpc.num_3dluts = pool->base.res_cap->num_mpc_3dlut; // Post-blending 3D LUT (preceded by shaper curve)
dc->caps.color.mpc.ogam_ram = 1; // Post-blending regamma.
// No pre-defined TF supported for regamma.
dc->caps.color.mpc.ogam_rom_caps.srgb = 0;
dc->caps.color.mpc.ogam_rom_caps.bt2020 = 0;
dc->caps.color.mpc.ogam_rom_caps.gamma2_2 = 0;
dc->caps.color.mpc.ogam_rom_caps.pq = 0;
dc->caps.color.mpc.ogam_rom_caps.hlg = 0;
dc->caps.color.mpc.ocsc = 1; // Output CSC matrix.
struct dpp_color_caps, struct mpc_color_caps and struct rom_curve_caps.
Now, using this guideline, we go through the color capabilities of the DPP and MPC blocks and talk more
about mapping driver-specific properties to the corresponding color blocks.
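Before diving in, it may help to see how these capability flags gate what can be programmed. The helper below is a hypothetical, self-contained sketch (not actual driver code); the field names mirror the dc->caps.color.dpp excerpt above:

```c
/* Hypothetical sketch (not actual driver code): field names follow
 * the dc->caps.color.dpp excerpt shown earlier. */
struct dpp_caps_sketch {
    unsigned int dgam_ram : 1;   /* DCN2-: Degamma RAM block */
    unsigned int gamma_corr : 1; /* DCN3+: Gamma Correction block */
};

/* Decide whether a user-supplied plane degamma 1D LUT can be
 * programmed on this hardware generation. */
static int can_program_plane_degamma_lut(const struct dpp_caps_sketch *dpp)
{
    /* DCN2 and older: the Degamma RAM block handles both hardcoded
     * curves and user LUTs. */
    if (dpp->dgam_ram)
        return 1;

    /* DCN3+: user LUTs go to the separate Gamma Correction block,
     * while hardcoded curves live in the Degamma ROM. */
    if (dpp->gamma_corr)
        return 1;

    return 0;
}
```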
dc->caps.color.dpp.dcn_arch
dc->caps.color.dpp.dgam_ram
, dc->caps.color.dpp.dgam_rom_caps
,dc->caps.color.dpp.gamma_corr
AMD Plane Degamma data is mapped to the initial stage of the DPP pipeline. It
is utilized to transition from scanout/encoded values to linear values for
arithmetic operations. Plane Degamma supports both pre-defined transfer
functions and 1D LUTs, depending on the hardware generation. DCN2 and older
families handle both types of curves in the Degamma RAM block
(dc->caps.color.dpp.dgam_ram); DCN3+ separates hardcoded curves and 1D LUTs
into two blocks: Degamma ROM (dc->caps.color.dpp.dgam_rom_caps) and the Gamma
Correction block (dc->caps.color.dpp.gamma_corr), respectively.
Pre-defined transfer functions:
struct drm_color_lut
elements. Setting TF = Identity/Default and LUT as
NULL means bypass.
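For illustration, here is a minimal userspace sketch of creating and attaching such a 1D LUT blob with libdrm. The property name AMD_PLANE_DEGAMMA_LUT is an assumption for illustration, and plane/property ID discovery plus error handling are omitted:

```c
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

#define LUT_SIZE 4096

/* Sketch: build a linear (identity) 1D LUT and attach it to a plane
 * via the assumed driver-specific "AMD_PLANE_DEGAMMA_LUT" property,
 * whose ID (prop_id) is assumed to have been looked up already. */
static int set_plane_degamma_lut(int fd, uint32_t plane_id, uint32_t prop_id)
{
    static struct drm_color_lut lut[LUT_SIZE];
    uint32_t blob_id;
    int ret;

    for (int i = 0; i < LUT_SIZE; i++) {
        uint16_t v = (uint16_t)((i * 0xffffULL) / (LUT_SIZE - 1));
        lut[i].red = lut[i].green = lut[i].blue = v;
        lut[i].reserved = 0;
    }

    ret = drmModeCreatePropertyBlob(fd, lut, sizeof(lut), &blob_id);
    if (ret)
        return ret;

    return drmModeObjectSetProperty(fd, plane_id, DRM_MODE_OBJECT_PLANE,
                                    prop_id, blob_id);
}
```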
References:
struct drm_color_ctm_3x4. Setting NULL means bypass.
References:
dc->caps.color.dpp.hw_3d_lut
The Shaper block fine-tunes color adjustments before applying the 3D LUT,
optimizing the use of the limited entries in each dimension of the 3D LUT. On
AMD hardware, a 3D LUT always means a preceding shaper 1D LUT used for
delinearizing and/or normalizing the color space before applying a 3D LUT, so
this entry on DPP color caps dc->caps.color.dpp.hw_3d_lut
means support for
both shaper 1D LUT and 3D LUT.
Pre-defined transfer functions enable delinearizing content with or without a
shaper LUT, where the AMD color module calculates the resulting shaper curve. Shaper
curves go from linear values to encoded values. If we are already in a
non-linear space and/or don't need to normalize values, we can set an Identity TF
for the shaper, which works similarly to a bypass and is also the default TF value.
Pre-defined transfer functions:
calculate_curve() function in the file
amd/display/modules/color/color_gamma.c.

struct drm_color_lut elements. When setting Plane Shaper TF (!= Identity)
and LUT at the same time, the color module will combine the pre-defined TF and
the custom LUT values into the LUT that's actually programmed. Setting TF =
Identity/Default and LUT as NULL works as bypass.
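As a sketch of what "setting TF and LUT at the same time" looks like from userspace, an atomic commit can carry both properties together, so the color module sees them in one state. Property IDs and the TF enum value are assumed to be resolved earlier; the function below is illustrative, not driver code:

```c
#include <xf86drm.h>
#include <xf86drmMode.h>

/* Sketch: submit shaper TF and shaper LUT in one atomic commit so the
 * color module can combine them. tf_prop/lut_prop are the (previously
 * discovered) driver-specific property IDs; lut_blob_id was created
 * with drmModeCreatePropertyBlob(). */
static int commit_shaper(int fd, uint32_t plane_id,
                         uint32_t tf_prop, uint64_t tf_value,
                         uint32_t lut_prop, uint32_t lut_blob_id)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    int ret;

    if (!req)
        return -1;

    drmModeAtomicAddProperty(req, plane_id, tf_prop, tf_value);
    drmModeAtomicAddProperty(req, plane_id, lut_prop, lut_blob_id);

    ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
    drmModeAtomicFree(req);
    return ret;
}
```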
References:
dc->caps.color.dpp.hw_3d_lut
The 3D LUT in the DPP block facilitates complex color transformations and
adjustments. A 3D LUT is a three-dimensional array where each element is an RGB
triplet. As mentioned before, dc->caps.color.dpp.hw_3d_lut describes whether the
DPP 3D LUT is supported.
The AMD driver-specific property advertises the size of a single dimension via the
LUT3D_SIZE property. Plane 3D LUT is a blob property where the data is interpreted
as an array of struct drm_color_lut elements and the number of entries is
LUT3D_SIZE cubed. The array contains samples from the approximated function;
values between samples are estimated by tetrahedral interpolation.
The array is accessed with three indices, one for each input dimension (color
channel), blue being the outermost dimension, red the innermost. This
distribution is better visualized when examining the code in
[RFC PATCH 5/5] drm/amd/display: Fill 3D LUT from userspace by Alex Hung:
+ for (nib = 0; nib < 17; nib++) {
+     for (nig = 0; nig < 17; nig++) {
+         for (nir = 0; nir < 17; nir++) {
+             ind_lut = 3 * (nib + 17*nig + 289*nir);
+
+             rgb_area[ind].red = rgb_lib[ind_lut + 0];
+             rgb_area[ind].green = rgb_lib[ind_lut + 1];
+             rgb_area[ind].blue = rgb_lib[ind_lut + 2];
+             ind++;
+         }
+     }
+ }
+
+ /* Stride and bit depth are not programmable by API yet.
+  * Therefore, only supports 17x17x17 3D LUT (12-bit).
+  */
+ lut->lut_3d.use_tetrahedral_9 = false;
+ lut->lut_3d.use_12bits = true;
+ lut->state.bits.initialized = 1;
+ __drm_3dlut_to_dc_3dlut(drm_lut, drm_lut3d_size, &lut->lut_3d,
+                         lut->lut_3d.use_tetrahedral_9,
+                         MAX_COLOR_3DLUT_BITDEPTH);
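From the userspace side, a corresponding fill of the blob under the same blue-outermost layout might look like the hypothetical identity-LUT sketch below (illustrative only, not part of the patch series):

```c
#include <stdint.h>
#include <stdlib.h>
#include <xf86drmMode.h> /* pulls in struct drm_color_lut */

#define LUT3D_DIM 17

/* Sketch: build an identity 17x17x17 3D LUT with blue as the outermost
 * index and red as the innermost, matching the kernel-side layout in
 * the patch above. Each channel is spread across the 16-bit range. */
static struct drm_color_lut *make_identity_3dlut(void)
{
    struct drm_color_lut *lut =
        calloc(LUT3D_DIM * LUT3D_DIM * LUT3D_DIM, sizeof(*lut));
    size_t i = 0;

    if (!lut)
        return NULL;

    for (int b = 0; b < LUT3D_DIM; b++)
        for (int g = 0; g < LUT3D_DIM; g++)
            for (int r = 0; r < LUT3D_DIM; r++) {
                lut[i].red = (uint16_t)((r * 0xffff) / (LUT3D_DIM - 1));
                lut[i].green = (uint16_t)((g * 0xffff) / (LUT3D_DIM - 1));
                lut[i].blue = (uint16_t)((b * 0xffff) / (LUT3D_DIM - 1));
                i++;
            }
    return lut;
}
```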
dc->caps.color.dpp.ogam_ram
The Blend/Out Gamma block applies the final touch-up before blending, allowing
users to linearize content after the 3D LUT and just before blending. It supports both 1D LUTs
and pre-defined TFs. We can see the Shaper and Blend LUTs as 1D LUTs that
sandwich the 3D LUT. So, if we don't need 3D LUT transformations, we may want
to use only the Degamma block to linearize and skip Shaper, 3D LUT and Blend.
Pre-defined transfer function:
struct drm_color_lut elements. If plane_blend_tf_property != Identity TF, the
AMD color module will combine the user LUT values with the pre-defined TF into the
LUT parameters to be programmed. Setting TF = Identity/Default and LUT to NULL
means bypass.
References:
struct drm_color_lut
elements. Setting NULL means bypass.
Not really supported. The driver is currently reusing the DPP degamma LUT block
(dc->caps.color.dpp.dgam_ram and dc->caps.color.dpp.gamma_corr) for
supporting the DRM CRTC Degamma LUT, as explained by [PATCH v3 20/32]
drm/amd/display: reject atomic commit if setting both plane and CRTC
degamma.
dc->caps.color.mpc.gamut_remap
It sets the current transformation matrix (CTM) applied to pixel data after the
lookup through the degamma LUT and before the lookup through the gamma LUT. The
data is interpreted as a struct drm_color_ctm. Setting NULL means bypass.
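A note on the data format may help here: struct drm_color_ctm carries nine sign-magnitude S31.32 fixed-point entries (the plane-level struct drm_color_ctm_3x4 mentioned earlier adds a fourth column for offsets), so an identity matrix is built as in this small illustrative sketch:

```c
#include <string.h>
#include <xf86drmMode.h> /* pulls in struct drm_color_ctm */

/* Sketch: fill a DRM CTM with an identity matrix. Each entry is a
 * sign-magnitude S31.32 fixed-point number, so 1.0 is 1ULL << 32 and
 * a negative coefficient would set bit 63 on its magnitude. */
static void ctm_identity(struct drm_color_ctm *ctm)
{
    memset(ctm, 0, sizeof(*ctm));
    ctm->matrix[0] = 1ULL << 32; /* row 0: red coefficient */
    ctm->matrix[4] = 1ULL << 32; /* row 1: green coefficient */
    ctm->matrix[8] = 1ULL << 32; /* row 2: blue coefficient */
}
```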
dc->caps.color.mpc.ogam_ram
After all that, you might still want to convert the content to wire encoding.
No worries: in addition to the DRM CRTC 1D LUT, we've got an AMD CRTC gamma transfer
function (TF) to make it happen. Possible TF values are defined by enum
amdgpu_transfer_function.
Pre-defined transfer functions:
struct drm_color_lut elements. When setting CRTC Gamma TF (!= Identity)
and LUT at the same time, the color module will combine the pre-defined TF and
the custom LUT values into the LUT that's actually programmed. Setting TF =
Identity/Default and LUT to NULL means bypass.
References:
color_range and color_encoding properties. It is used for color space
conversion of the input content. On the other hand, the DC Output CSC
(OCSC) sets pre-defined coefficients from the DRM connector colorspace
properties. It is used for color space conversion of the composed image to the
one supported by the sink.
References:
pg_basebackup) are protected, but the vast majority of attacks aren't stopped by TDE.
Any attacker who can access the database while it's running can just ask for an SQL-level dump of the stored data, and they'll get the unencrypted data quick as you like.
pg_crypto
PostgreSQL ships a contrib module called pg_crypto, which provides encryption and decryption functions.
This sounds ideal for encrypting data within our applications, as it's available no matter what we're using to write our application.
It avoids the problem of framework-specific cryptography, because you call the same PostgreSQL functions no matter what language you're using, which produces the same output.
However, I don't recommend ever using pg_crypto's data encryption functions, and I doubt you will find many other cryptographic engineers who will, either.
First up, and most horrifyingly, it requires you to pass the long-term keys to the database server.
If there's an attacker actively in the database server, they can capture the keys as they come in, which means all the data encrypted using that key is exposed.
Sending the keys can also result in the keys ending up in query logs, both on the client and server, which is obviously a terrible result.
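To see why, consider what actually goes over the wire. This minimal C/libpq sketch (illustrative only; pgp_sym_encrypt is one of the module's symmetric-encryption functions, and the key literal is deliberately conspicuous) shows the long-term key embedded in the query text itself, where statement logging on either end can capture it:

```c
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=app");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "%s", PQerrorMessage(conn));
        return 1;
    }

    /* The long-term key travels to the server inside the SQL text.
     * Anything that logs full statements (e.g. log_statement=all on
     * the server, or client-side query logging) can capture it. */
    PGresult *res = PQexec(conn,
        "SELECT pgp_sym_encrypt('card number 4111...', "
        "'this-long-term-key-is-now-in-your-logs')");

    PQclear(res);
    PQfinish(conn);
    return 0;
}
```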
Less scary, but still very concerning, is that pg_crypto's available cryptography is, to put it mildly, antiquated.
We have a lot of newer, safer, and faster techniques for data encryption that aren't available in pg_crypto.
This means that if you do use it, you're leaving a lot on the table, and need to have skilled cryptographic engineers on hand to avoid the potential pitfalls.
In short: friends don't let friends use pg_crypto.
Next.